Results 1 - 20 of 56
1.
Ergonomics ; 66(8): 1132-1141, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36227226

ABSTRACT

Observer, manual single-frame video, and automated computer vision measures of the Hand Activity Level (HAL) were compared. HAL can be measured three ways: (1) observer rating (HALO), (2) calculated from single-frame multimedia video task analysis measuring frequency (F) and duty cycle (D) (HALF), or (3) from automated computer vision (HALC). This study analysed videos collected from three prospective cohort studies to ascertain HALO, HALF, and HALC for 419 industrial videos. Although the differences among the three methods were relatively small on average (<1), they were statistically significant (p < .001). Agreement between the HALC and HALF ratings within ±1 point on the HAL scale was the most consistent: more than two thirds (68%) of all cases were within that range, with a linear regression through the mean yielding a coefficient of 1.03 (R2 = 0.89). The results suggest that the computer vision methodology yields results comparable to single-frame video analysis. Practitioner summary: The ACGIH Hand Activity Level (HAL) was obtained for 419 industrial tasks using three methods: observation, calculation from single-frame video analysis, and computer vision. The computer vision methodology produced results comparable to single-frame video analysis.
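HALF in this abstract is computed from the measured frequency F and duty cycle D. As a minimal sketch of that step only, the function below assumes the published frequency/duty-cycle HAL equation (Radwin et al., Ergonomics 2015); the study's own HALF and HALC pipelines are not reproduced here, so verify the formula against the source before relying on it.

```python
import math

def hal_from_f_and_d(f_exertions_per_s: float, duty_cycle_pct: float) -> float:
    """HAL (0-10) from exertion frequency F (exertions/s) and duty cycle D (%),
    assuming HAL = 6.56 ln(D) * F^1.31 / (1 + 3.18 F^1.31)."""
    f131 = f_exertions_per_s ** 1.31
    hal = 6.56 * math.log(duty_cycle_pct) * f131 / (1 + 3.18 * f131)
    return min(max(hal, 0.0), 10.0)  # clamp to the HAL rating scale

# Example: 0.5 exertions/s at a 60% duty cycle gives a HAL of roughly 4.7
print(round(hal_from_f_and_d(0.5, 60.0), 1))
```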


Subject(s)
Hand; Task Performance and Analysis; Humans; Prospective Studies; Upper Extremity; Computers; Video Recording/methods
2.
Hum Factors ; : 187208221093829, 2022 May 12.
Article in English | MEDLINE | ID: mdl-35548929

ABSTRACT

OBJECTIVE: The effect of camera viewpoint was studied when performing visually obstructed psychomotor targeting tasks. BACKGROUND: Previous research in laparoscopy and robotic teleoperation found that complex perceptual-motor adaptations associated with misaligned viewpoints corresponded to degraded performance in manipulation. Because optimal camera positioning is often unavailable in restricted environments, alternative viewpoints that might mitigate performance effects are not obvious. METHODS: A virtual keyboard-controlled targeting task was remotely distributed to workers of Amazon Mechanical Turk. The experiment was performed by 192 subjects for a static viewpoint with independent parameters of target direction, Fitts' law index of difficulty, viewpoint azimuthal angle (AA), and viewpoint polar angle (PA). A dynamic viewpoint experiment was also performed by 112 subjects in which the viewpoint AA changed after every trial. RESULTS: AA and target direction had significant effects on performance for the static viewpoint experiment. Movement time and travel distance increased as AA increased, until a discrete improvement in performance at 180°. Increasing AA from 225° to 315° linearly decreased movement time and distance. There were significant main effects of current AA and magnitude of transition for the dynamic viewpoint experiment. Orthogonal-direction and no-change viewpoint transitions affected performance the least. CONCLUSIONS: Viewpoint selection should aim to minimize associated rotations within the manipulation plane when performing targeting tasks, whether implementing a static or dynamic viewing solution. Because PA rotations had negligible performance effects, PA adjustments may extend the space of viable viewpoints. APPLICATIONS: These results can inform viewpoint selection for visual feedback during psychomotor tasks.
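The targeting task varies Fitts' law index of difficulty as an independent parameter. A minimal sketch of how that quantity is conventionally computed (Shannon formulation) and used in a movement-time model; the coefficients in the example are hypothetical, not values from this study.

```python
import math

def fitts_id(amplitude: float, width: float) -> float:
    """Index of difficulty (bits), Shannon formulation: ID = log2(A/W + 1)."""
    return math.log2(amplitude / width + 1)

def movement_time(a: float, b: float, amplitude: float, width: float) -> float:
    """Fitts' law MT = a + b * ID, with a (s) and b (s/bit) fit per condition,
    e.g., per viewpoint azimuthal angle."""
    return a + b * fitts_id(amplitude, width)

# Hypothetical coefficients for one static-viewpoint condition
print(round(movement_time(a=0.3, b=0.25, amplitude=120, width=15), 2))
```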

3.
Hum Factors ; : 187208221077722, 2022 Mar 28.
Article in English | MEDLINE | ID: mdl-35345922

ABSTRACT

OBJECTIVE: Trade-offs between productivity, physical workload (PWL), and mental workload (MWL) were studied when integrating collaborative robots (cobots) into existing manual work by optimizing the allocation of tasks. BACKGROUND: As cobots become more widely introduced in the workplace and their capabilities greatly improve, there is a need to consider how they can best help their human partners. METHODS: A theoretical data-driven analysis was conducted using the O*NET Content Model to evaluate 16 selected jobs for associated work context, skills, and constraints. Associated work activities were ranked by potential for substitution by a cobot. PWL and MWL were estimated using variables from the O*NET database that represent variables for the Strain Index and NASA-TLX. An algorithm was developed to optimize work activity assignment to cobots and human workers according to their most suited abilities. RESULTS: After tasks were reassigned to cobots, human workload decreased for some jobs and increased for others, where residual human capacity was used to perform the job activities designated most important, increasing productivity. The human workload for other jobs remained unchanged. CONCLUSIONS: The changes in human workload from the introduction of cobots may not always benefit the human worker unless trade-offs are considered. APPLICATION: The framework of this study may be applied to existing jobs to identify the relationship between productivity and worker tolerances when integrating cobots into specific tasks.

4.
Hum Factors ; 64(2): 265-268, 2022 03.
Article in English | MEDLINE | ID: mdl-35025608

ABSTRACT

Scientific publications today operate at a time when trust in science depends upon effective vetting of data, identification of questionable practices, and scrutiny of research. The Editor-in-Chief has an invaluable opportunity to influence the direction and reputation of our field, but also the responsibility to confront contemporary trends that threaten the publication of quality research. The editor is responsible for maintaining strict scientific standards for the journal through the exercise of good judgment and a steadfast commitment to upholding the highest ethical principles. Opportunities exist to create and implement new initiatives for improving the peer review process and elevating the journal's stature. The journal must address these challenges and communicate effectively with the public, who seek a reliable source of information.


Subject(s)
Internet; Humans
5.
Hum Factors ; 64(3): 482-498, 2022 05.
Article in English | MEDLINE | ID: mdl-32972247

ABSTRACT

OBJECTIVE: A computer vision method was developed for estimating the trunk flexion angle, angular speed, and angular acceleration by extracting simple features from the moving image during lifting. BACKGROUND: Trunk kinematics is an important risk factor for lower back pain, but is often difficult for practitioners to measure in lifting risk assessments. METHODS: Mannequins representing a wide range of hand locations for different lifting postures were systematically generated using the University of Michigan 3DSSPP software. A bounding box was drawn tightly around each mannequin, and regression models estimated trunk angles. The estimates were validated against human posture data for 216 lifts collected using a laboratory-grade motion capture system and synchronized video recordings. Trunk kinematics, based on bounding box dimensions drawn around the subjects in the video recordings of the lifts, were modeled for consecutive video frames. RESULTS: The mean absolute difference between predicted and motion-capture-measured trunk angles was 14.7°, and there was a significant linear relationship between predicted and measured trunk angles (R2 = .80, p < .001). The training error for the kinematics model was 2.3°. CONCLUSION: Using simple computer vision-extracted features, the bounding box method indirectly estimated trunk angle and associated kinematics, albeit with limited precision. APPLICATION: This computer vision method may be implemented on handheld devices such as smartphones to facilitate automatic lifting risk assessments in the workplace.
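A minimal sketch of the bounding-box feature extraction and finite-difference kinematics the abstract describes. The regression coefficients mapping box dimensions to trunk angle were trained on 3DSSPP mannequins and are not published in the abstract, so only the feature and kinematics steps are shown; the inputs (a person segmentation mask and stature in pixels) are assumptions.

```python
import numpy as np

def bounding_box_features(person_mask: np.ndarray, stature_px: float):
    """Stature-normalized height and width of a tight box around a segmented
    person (boolean mask), the simple features used to estimate trunk angle."""
    rows, cols = np.nonzero(person_mask)
    height = (rows.max() - rows.min() + 1) / stature_px
    width = (cols.max() - cols.min() + 1) / stature_px
    return height, width

def trunk_kinematics(trunk_angle_deg: np.ndarray, fps: float):
    """Angular speed (deg/s) and acceleration (deg/s^2) by finite differences
    over the per-frame trunk angle estimates."""
    speed = np.gradient(trunk_angle_deg) * fps
    accel = np.gradient(speed) * fps
    return speed, accel
```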


Subject(s)
Lifting; Torso; Biomechanical Phenomena; Computers; Humans; Posture
6.
Simul Healthc ; 16(6): e188-e193, 2021 Dec 01.
Article in English | MEDLINE | ID: mdl-34860738

ABSTRACT

INTRODUCTION: Previous efforts used digital video to develop computer-generated assessments of surgical hand motion economy and fluidity of motion. This study tests how well previously trained assessment models match expert ratings of suturing and tying video clips recorded in a new operating room (OR) setting. METHODS: Enabled through computer vision of the hands, this study tests the applicability of assessments born out of benchtop simulations to in vivo suturing and tying tasks recorded in the OR. RESULTS: Compared with expert ratings, computer-generated assessments for fluidity of motion (slope = 0.83, intercept = 1.77, R2 = 0.55) performed better than those for motion economy (slope = 0.73, intercept = 2.04, R2 = 0.49), although 85% of ratings for both models were within ±2 of the expert response. Neither assessment performed as well in the OR as on the training data. Assessments were sensitive to changing hand postures, dropped ligatures, and poor tissue contact, features typically missing from training data. Computer-generated assessment of OR tasks was contingent on a clear, consistent view of both of the surgeon's hands. CONCLUSIONS: Computer-generated assessment may help provide formative feedback during deliberate practice, albeit with greater variability in the OR compared with benchtop simulations. Future work will benefit from an expanded corpus of bimanual video records.


Subject(s)
Clinical Competence; Suture Techniques; Humans; Operating Rooms
7.
Appl Ergon ; 97: 103531, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34273816

ABSTRACT

Worker posture, task time, and performance are often affected when one-handed manual dexterous tasks are performed in small overhead spaces under an obscured view. A common method for supplementing visual feedback in these cases is a hand-held telescopic mirror, but that involves working with both arms extended overhead and is often accompanied by awkward neck and shoulder postures. A video camera was considered as an alternative to a mirror for providing visual feedback while reducing overhead reach. A mirror, a borescope, and an omnidirectional camera were evaluated while laboratory participants performed three one-handed simulated manufacturing tasks in a small overhead enclosure. Videos were recorded to quantify the time that postures were assumed while performing the tasks. The average time that both arms were above mid-shoulder height was more than 2.5 times shorter for the omnidirectional camera than for the mirror and borescope. The average proportion of neck strain time was 0.01% (or less) for both the omnidirectional camera and the borescope, compared to 83.68% for the mirror. No significant differences were observed in task completion times among the three modalities. Hence, an omnidirectional camera can provide visibility while reducing straining postures for manufacturing operations involving overhead work.


Subject(s)
Posture; Shoulder; Arm; Feedback; Humans; Neck
8.
IEEE Trans Hum Mach Syst ; 51(6): 734-739, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35677387

ABSTRACT

A robust computer vision-based approach is developed to estimate the load asymmetry angle defined in the revised NIOSH lifting equation (RNLE). The angle of asymmetry enables the computation of a recommended weight limit for repetitive lifting operations in a workplace to prevent lower back injuries. The open-source package OpenPose is applied to estimate the 2D locations of the worker's skeletal joints from two synchronous videos. Combining these joint location estimates, a computer vision correspondence and depth estimation method is developed to estimate the 3D coordinates of skeletal joints during lifting. The angle of asymmetry is then deduced from a subset of these 3D positions. Error analysis reveals unreliable angle estimates due to occlusions of the upper limbs. A robust angle estimation method that mitigates this challenge is developed: unreliable angle estimates are flagged based on the average confidence level of the 2D joint estimates provided by OpenPose. An optimal threshold is derived that balances the percentage variance reduction of the estimation error against the percentage of angle estimates flagged. When tested on 360 lifting instances in a NIOSH-provided dataset, the method reduced the standard deviation of the angle estimation error from 10.13° to 4.99°. To realize this error variance reduction, 34% of estimated angles are flagged and require further validation.
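A sketch of the two-view pipeline under stated assumptions: OpenPose supplies per-camera 2D joints and confidences, standard triangulation recovers 3D joints, and a low mean confidence flags an instance. The asymmetry-angle geometry and the 0.5 threshold below are simplified illustrations, not the paper's formulation or its derived optimal threshold.

```python
import numpy as np
import cv2

def triangulate_joints(P1, P2, pts1, pts2):
    """3D joints from two synchronized views. P1, P2: 3x4 projection matrices;
    pts1, pts2: 2xN pixel coordinates of matching joints."""
    Xh = cv2.triangulatePoints(P1, P2, pts1.astype(np.float64), pts2.astype(np.float64))
    return (Xh[:3] / Xh[3]).T  # N x 3 Euclidean coordinates

def asymmetry_angle_deg(l_hip, r_hip, mid_hands, up=np.array([0.0, 0.0, 1.0])):
    """Simplified asymmetry angle: horizontal-plane angle between the worker's
    facing direction (from the pelvis axis) and the direction to the hands."""
    forward = np.cross(up, r_hip - l_hip)      # facing direction; sign convention assumed
    to_load = mid_hands - (l_hip + r_hip) / 2
    f = forward - np.dot(forward, up) * up     # project onto the floor plane
    t = to_load - np.dot(to_load, up) * up
    cos_a = np.dot(f, t) / (np.linalg.norm(f) * np.linalg.norm(t))
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))

def flag_unreliable(joint_confidences, threshold=0.5):
    """Flag a lift when mean OpenPose keypoint confidence is low (threshold assumed)."""
    return float(np.mean(joint_confidences)) < threshold
```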

9.
Hum Factors ; 63(7): 1169-1181, 2021 11.
Article in English | MEDLINE | ID: mdl-32286884

ABSTRACT

OBJECTIVE: Surgeon tremor was measured during vitreoretinal microscopic surgeries under different hand support conditions. BACKGROUND: While the ophthalmic surgeon's forearm is supported using a standard symmetric wrist rest when operating on the patient's eye on the same side as the dominant hand (SSD), the surgeon's hand is placed directly on the patient's forehead when operating on the side contralateral to the dominant hand (CSD). It was hypothesized that more tremor is associated with CSD surgeries than SSD surgeries and that, using an experimental asymmetric wrist rest in which the contralateral wrist bar gradually rises and curves toward the patient's operative eye, there is no difference in tremor between CSD and SSD surgeries. METHODS: Seventy-six microscope videos, recorded from three surgeons performing macular membrane peeling operations, were analyzed using marker-less motion tracking, and movement data (instrument path length and acceleration) were recorded. Tremor acceleration frequency and magnitude were measured using spectral analysis. Following 47 surgeries using a conventional symmetric wrist support, the surgeons incorporated the experimental asymmetric wrist rest into their surgical routine. RESULTS: Average tremor acceleration magnitude was 0.11 mm/s2 (22%) greater (p = .05) for CSD surgeries (0.62 mm/s2, SD = 0.08) than SSD surgeries (0.51 mm/s2, SD = 0.09) with the symmetric wrist rest, while no significant (p > .05) differences were observed (0.57 mm/s2, SD = 0.13 for SSD and 0.58 mm/s2, SD = 0.11 for CSD surgeries) with the experimental asymmetric wrist rest. CONCLUSION: The asymmetric wrist support reduced the difference in tremor acceleration between CSD and SSD surgeries.
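The spectral-analysis step can be sketched as below, assuming a Welch power spectral density and a nominal physiological tremor band. The 6-12 Hz band limits are an assumption; the abstract reports tremor frequency and magnitude without listing them.

```python
import numpy as np
from scipy.signal import welch

def tremor_acceleration_magnitude(accel_mm_s2, fs, band=(6.0, 12.0)):
    """RMS tremor acceleration within a frequency band, from a per-frame
    instrument-tip acceleration trace (mm/s^2) sampled at fs Hz."""
    f, pxx = welch(accel_mm_s2, fs=fs, nperseg=min(1024, len(accel_mm_s2)))
    in_band = (f >= band[0]) & (f <= band[1])
    # integrate the PSD over the band, then convert back to an RMS amplitude
    return float(np.sqrt(np.trapz(pxx[in_band], f[in_band])))
```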


Subject(s)
Tremor; Vitreoretinal Surgery; Hand; Humans; Wrist; Wrist Joint
10.
SLAS Technol ; 26(3): 320-326, 2021 06.
Article in English | MEDLINE | ID: mdl-33089763

ABSTRACT

Technicians in a commercial laboratory manually uncap up to 700 sample tubes daily in preparation for bioanalytical testing. Manually twisting off sample tube caps is not only time-consuming but also poses increased risk for muscle fatigue and repetitive-motion injuries. An automated device capable of uncapping sample tubes at a rate faster than the current workflow would be valuable for minimizing strain on technicians' hands and saving time. Although several commercial sample tube-uncapping products exist, they are not always usable for a workload that mixes tube sizes or for a specific workflow. A functioning uncapping device was developed that semi-automatically uncaps sample tubes of three different heights and diameters and is compatible with the workflow in a commercial laboratory setting. Under limited testing, the average success rate for uncapping each of the three sample tube sizes, or a mix of them, was 90% or more; the device was more than three times faster than manual uncapping and met standard acceptance criteria using mass spectrometry. With its current performance, the device is still a prototype requiring further development. However, it showed promise for ergonomic benefit to laboratory technicians by reducing the need to manually unscrew caps.


Subject(s)
Workflow; Mass Spectrometry
12.
Appl Ergon ; 87: 103136, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32501255

ABSTRACT

This paper compares clinician hand motion for common suturing tasks across a range of experience levels and tissue types. Medical students (32), residents (41), attending surgeons (10), and retirees (2) were recorded on digital video while suturing on one of: foam, pig feet, or porcine bowel. Depending on time in position, each medical student, resident, and attending participant was classified as junior or senior, yielding six experience categories. This work focuses on trends associated with increasing tenure observed from those medical students (10), residents (15), and attendings (10) who sutured on foam, and draws comparison across tissue types where pertinent. Utilizing custom software, the two-dimensional location of each of the participant's hands were automatically recorded in every video frame, producing a rich spatiotemporal feature set. While suturing on foam, increasing clinician experience was associated with conserved path length per cycle of the non-dominant hand, significantly reducing from junior medical students (mean = 73.63 cm, sd = 33.21 cm) to senior residents (mean = 46.16 cm, sd = 14.03 cm, p = 0.015), and again between senior residents and senior attendings (mean = 30.84 cm, sd = 14.51 cm, p = 0.045). Despite similar maneuver rates, attendings also accelerated less with their non-dominant hand (mean = 16.27 cm/s2, sd = 81.12 cm/s2, p = 0.002) than senior residents (mean = 24.84 cm/s2, sd = 68.29 cm/s2, p = 0.002). While tying, medical students moved their dominant hands slower (mean = 4.39 cm/s, sd = 1.73 cm/s, p = 0.033) than senior residents (mean = 6.53 cm/s, sd = 2.52 cm/s). These results suggest that increased psychomotor performance during early training manifest through faster dominant hand function, while later increases are characterized by conserving energy and efficiently distributing work between hands. Incorporating this scalable video-based motion analysis into regular formative assessment routines may enable greater quality and consistency of feedback throughout a surgical career.
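A minimal sketch of the spatiotemporal features named in the results (path length, speed, acceleration), computed from a per-frame 2D hand track. The custom tracking software itself is not public, so the input is assumed to be an N x 2 array of hand positions in centimeters.

```python
import numpy as np

def hand_motion_features(xy_cm: np.ndarray, fps: float):
    """Path length (cm), mean speed (cm/s), and mean |acceleration| (cm/s^2)
    from a per-frame 2D hand location track (N x 2, in cm)."""
    steps = np.diff(xy_cm, axis=0)              # displacement per frame
    dists = np.linalg.norm(steps, axis=1)
    speed = dists * fps
    accel = np.diff(speed) * fps
    return dists.sum(), speed.mean(), np.abs(accel).mean()
```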


Subject(s)
Clinical Competence; Hand/physiology; Surgeons; Suture Techniques; Work/physiology; Adult; Biomechanical Phenomena; Female; Humans; Internship and Residency; Male; Middle Aged; Motion (Physics); Psychomotor Performance; Simulation Training; Students, Medical; Task Performance and Analysis
13.
Appl Ergon ; 85: 103061, 2020 May.
Article in English | MEDLINE | ID: mdl-32174349

ABSTRACT

Workers in hospitals, clinics, and contract research organizations who repetitively use syringes have an increased risk for musculoskeletal disorders. This study developed and tested a novel syringe adapter designed to reduce muscle strain associated with repetitive fluid draws. Three syringe plunger extension methods (ring-finger, middle-finger, and syringe adapter) were studied across twenty participants. Electromyogram signals for the flexor digitorum superficialis and extensor digitorum muscles were recorded. The syringe adapter required 31% of the 90th-percentile flexor muscle activity of the ring-finger extension method, and 45% of that of the middle-finger method (p < 0.001). The greatest differences were observed when the syringe was near full extension. Although the syringe adapter took 1.5 times longer than the other extension methods, it greatly reduced the physical stress associated with repetitive, awkward syringe procedures.
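A sketch of how a 90th-percentile muscle-activity summary is commonly derived from a raw EMG record: rectify, RMS-smooth, then take the 90th percentile of the amplitude distribution (an APDF-style summary). The smoothing window length is an assumption, not a parameter reported in the abstract.

```python
import numpy as np

def emg_90th_percentile(raw_emg: np.ndarray, fs: float, window_s: float = 0.1):
    """90th-percentile smoothed EMG amplitude from a raw record sampled at fs Hz."""
    n = max(1, int(window_s * fs))
    rectified = np.abs(raw_emg - raw_emg.mean())        # remove DC offset, rectify
    rms = np.sqrt(np.convolve(rectified**2, np.ones(n) / n, mode="same"))
    return float(np.percentile(rms, 90))
```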


Subject(s)
Equipment Design; Ergonomics; Occupational Diseases/prevention & control; Sprains and Strains/prevention & control; Syringes; Biomechanical Phenomena; Cumulative Trauma Disorders/etiology; Cumulative Trauma Disorders/prevention & control; Electromyography; Female; Fingers/physiology; Hand/physiology; Humans; Laboratory Personnel; Male; Muscle Fatigue/physiology; Muscle, Skeletal/physiology; Musculoskeletal Diseases/etiology; Musculoskeletal Diseases/prevention & control; Occupational Diseases/etiology; Sprains and Strains/etiology; Syringes/adverse effects; Young Adult
14.
Ergonomics ; 62(8): 1043-1054, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31092146

ABSTRACT

A widely used risk prediction tool, the revised NIOSH lifting equation (RNLE), provides the recommended weight limit (RWL), but is limited by analyst subjectivity, experience, and resources. This paper describes a robust, non-intrusive, straightforward approach to automatically extract the spatial and temporal factors necessary for the RNLE using a single video camera in the sagittal plane. The participant's silhouette is segmented by motion information, and the novel use of a ghosting effect provides accurate detection of lifting instances and prediction of hand and foot locations. Laboratory tests with 6 participants, each performing 36 lifts, showed that nominal 640 × 480 pixel 2D video, in comparison to 3D motion capture, provided RWL estimates within 0.2 kg (SD = 1.0 kg). The linear regression between the video and 3D-tracking RWL had R2 = 0.96 (slope = 1.0, intercept = 0.2 kg). Since low-definition video was used in order to synchronise with motion capture, better performance is anticipated with high-definition video. Practitioner's summary: An algorithm for automatically calculating the revised NIOSH lifting equation using a single video camera was evaluated in comparison to laboratory 3D motion capture. The results indicate that this method has suitable accuracy for practical use and may be particularly useful when multiple lifts are evaluated. Abbreviations: 2D: two-dimensional; 3D: three-dimensional; ACGIH: American Conference of Governmental Industrial Hygienists; AM: asymmetric multiplier; BOL: beginning of lift; CM: coupling multiplier; DM: distance multiplier; EOL: end of lift; FIRWL: frequency independent recommended weight limit; FM: frequency multiplier; H: horizontal distance; HM: horizontal multiplier; IMU: inertial measurement unit; ISO: International Organization for Standardization; LC: load constant; NIOSH: National Institute for Occupational Safety and Health; RGB: red, green, blue; RGB-D: red, green, blue - depth; RNLE: revised NIOSH lifting equation; RWL: recommended weight limit; SD: standard deviation; TLV: threshold limit value; VM: vertical multiplier; V: vertical distance.
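The RWL itself follows the standard RNLE form, RWL = LC x HM x VM x DM x AM x FM x CM, using the multipliers named in the abbreviation list above. A sketch with the published metric multiplier formulas; FM and CM come from the NIOSH lookup tables (frequency/duration and hand-to-container coupling), so they are passed in rather than computed, and the example inputs are hypothetical.

```python
def rnle_rwl(H, V, D, A, FM, CM):
    """Recommended weight limit (kg), metric RNLE.
    H: horizontal hand distance (cm), V: vertical hand height (cm),
    D: vertical travel distance (cm), A: asymmetry angle (deg)."""
    LC = 23.0                        # load constant (kg)
    HM = 25.0 / max(H, 25.0)         # horizontal multiplier
    VM = 1 - 0.003 * abs(V - 75)     # vertical multiplier
    DM = 0.82 + 4.5 / max(D, 25.0)   # distance multiplier
    AM = 1 - 0.0032 * A              # asymmetry multiplier
    return LC * HM * VM * DM * AM * FM * CM

# Hypothetical lift: H=40 cm, V=30 cm, D=50 cm, A=20 deg; FM, CM from tables
print(round(rnle_rwl(40, 30, 50, 20, FM=0.85, CM=0.95), 1))  # about 8.6 kg
```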


Subject(s)
Ergonomics/methods; Lifting; Physiological Monitoring/methods; Occupational Health; Video Recording/methods; Adult; Female; Humans; Linear Models; Male; National Institute for Occupational Safety and Health, U.S.; Risk Assessment; United States
15.
Hum Factors ; 61(8): 1326-1339, 2019 12.
Article in English | MEDLINE | ID: mdl-31013463

ABSTRACT

OBJECTIVE: This study explores how common machine learning techniques can predict surgical maneuvers from a continuous video record of surgical benchtop simulations. BACKGROUND: Automatic computer vision recognition of surgical maneuvers (suturing, tying, and transition) could expedite video review and objective assessment of surgeries. METHOD: We recorded hand movements of 37 clinicians performing simple and running subcuticular suturing benchtop simulations, and applied three machine learning techniques (decision trees, random forests, and hidden Markov models) to classify surgical maneuvers in every 2 s (60 frames) of video. RESULTS: Random forest predictions correctly classified 74% of all video segments into suturing, tying, and transition states for a randomly selected test set. Hidden Markov model adjustments improved the random forest predictions to 79% for simple interrupted suturing on a subset of randomly selected participants. CONCLUSION: Random forest predictions aided by hidden Markov modeling provided the best prediction of surgical maneuvers. Training models across all users improved prediction accuracy by 10% compared with training on a random selection of participants. APPLICATION: Marker-less video hand tracking can predict surgical maneuvers from a continuous video record with accuracy similar to robot-assisted surgical platforms, and may enable more efficient video review of surgical procedures for training and coaching.
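A sketch of the two-stage classification described: a random forest labels each 2-s window, and a hidden-Markov smoothing pass (Viterbi decoding over the forest's class probabilities) cleans the sequence. The transition matrix below is assumed for illustration; the paper's fitted parameters are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

STATES = ["suturing", "tying", "transition"]

def viterbi_smooth(prob_seq: np.ndarray, trans: np.ndarray):
    """Smooth per-window class probabilities (T x K) with a first-order HMM,
    treating classifier outputs as emission likelihoods."""
    T, K = prob_seq.shape
    logp, logt = np.log(prob_seq + 1e-12), np.log(trans + 1e-12)
    dp = np.zeros((T, K)); back = np.zeros((T, K), dtype=int)
    dp[0] = logp[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + logt   # scores[i, j]: prev state i -> state j
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + logp[t]
    path = [int(dp[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    return path[::-1]

# Usage sketch (training data assumed):
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
# probs = clf.predict_proba(X_windows)               # one row per 2-s window
# trans = np.full((3, 3), 0.05) + np.eye(3) * 0.85   # sticky, assumed transitions
# labels = [STATES[i] for i in viterbi_smooth(probs, trans)]
```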


Subject(s)
Hand; Image Interpretation, Computer-Assisted; Machine Learning; Motor Skills; Pattern Recognition, Automated; Surgical Procedures, Operative; Humans; Video Recording
16.
Hum Factors ; 61(1): 64-77, 2019 02.
Article in English | MEDLINE | ID: mdl-30091947

ABSTRACT

OBJECTIVE: A method for automatically classifying lifting postures from simple features in video recordings was developed and tested. We explored whether an "elastic" rectangular bounding box, drawn tightly around the subject, can be used for classifying standing, stooping, and squatting at the lift origin and destination. BACKGROUND: Current marker-less video tracking methods depend on a priori skeletal human models, which are prone to error from poor illumination, obstructions, and difficulty placing cameras in the field. Robust computer vision algorithms based on spatiotemporal features were previously applied for evaluating repetitive motion tasks, exertion frequency, and duty cycle. METHODS: Mannequin poses were systematically generated using the Michigan 3DSSPP software for a wide range of hand locations and lifting postures. The stature-normalized height and width of a bounding box were measured in the sagittal plane and when rotated horizontally by 30°. After randomly ordering the data, a classification and regression tree algorithm was trained to classify the lifting postures. RESULTS: The resulting tree had four levels and four splits, misclassifying 0.36% of training-set cases. The algorithm was tested using 30 video clips of industrial lifting tasks, misclassifying 3.33% of test-set cases. The sensitivity and specificity, respectively, were 100.0% and 100.0% for squatting, 90.0% and 100.0% for stooping, and 100.0% and 95.0% for standing. CONCLUSIONS: The tree classification algorithm is capable of classifying lifting postures based only on the dimensions of bounding boxes. APPLICATIONS: It is anticipated that this practical algorithm can be implemented on handheld devices such as a smartphone, making it readily accessible to practitioners.
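A minimal sketch of the classification step under stated assumptions: the features are the stature-normalized bounding-box height and width in the sagittal view and in a view rotated 30° horizontally, and the training rows below are invented placeholders, not the 3DSSPP-generated data.

```python
from sklearn.tree import DecisionTreeClassifier

# Assumed feature order: [height_sagittal, width_sagittal, height_30deg, width_30deg],
# each normalized by stature. Rows are illustrative placeholders only.
X = [
    [0.98, 0.34, 0.97, 0.38],   # upright standing
    [0.66, 0.55, 0.68, 0.58],   # stooping
    [0.55, 0.42, 0.56, 0.46],   # squatting
]
y = ["standing", "stooping", "squatting"]

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)  # abstract reports a 4-level tree
print(tree.predict([[0.64, 0.56, 0.67, 0.59]]))       # -> ['stooping']
```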


Subject(s)
Lifting; Posture/physiology; Task Performance and Analysis; Algorithms; Biomechanical Phenomena; Decision Trees; Humans; Manikins; Reproducibility of Results; Video Recording
17.
Ann Surg ; 269(3): 574-581, 2019 03.
Article in English | MEDLINE | ID: mdl-28885509

ABSTRACT

OBJECTIVE: Computer vision was used to predict expert performance ratings from surgeon hand motions for tying and suturing tasks. SUMMARY BACKGROUND DATA: Existing methods, including the objective structured assessment of technical skills (OSATS), have proven reliable, but do not readily discriminate at the task level. Computer vision may be used for evaluating distinct task performance throughout an operation. METHODS: Open surgeries were video recorded and surgeon hands were tracked without sensors or markers. An expert panel of 3 attending surgeons rated tying and suturing video clips on continuous scales from 0 to 10 along 3 task measures adapted from the broader OSATS: motion economy, fluidity of motion, and tissue handling. Empirical models were developed to predict the expert consensus ratings from the hand kinematic data records. RESULTS: Predicted versus panel ratings for suturing had slopes from 0.73 to 1 and intercepts from 0.36 to 1.54 (average R2 = 0.81). Predicted versus panel ratings for tying had slopes from 0.39 to 0.88 and intercepts from 0.79 to 4.36 (average R2 = 0.57). The mean squared error between predicted and expert ratings was consistently less than the mean squared difference between individual expert ratings and the eventual consensus ratings. CONCLUSIONS: The computer algorithm consistently predicted the panel ratings of individual tasks, and was more objective and reliable than individual assessment by surgical experts.


Subject(s)
Artificial Intelligence; Clinical Competence; Suture Techniques; Task Performance and Analysis; Algorithms; Biomechanical Phenomena; Female; Hand/physiology; Humans; Male; Models, Theoretical; Observer Variation; Reproducibility of Results; Video Recording
18.
Hum Factors ; 59(5): 844-860, 2017 08.
Article in English | MEDLINE | ID: mdl-28704631

ABSTRACT

Objective This research considers how driver movements in video clips of naturalistic driving are related to observer subjective ratings of distraction and engagement behaviors. Background Naturalistic driving video provides a unique window into driver behavior unmatched by crash data, roadside observations, or driving simulator experiments. However, manually coding many thousands of hours of video is impractical. An objective method for identifying driver behaviors suggestive of distracted or disengaged driving is needed so that automated computer vision analysis can access this rich source of data. Method Visual analog scales ranging from 0 to 10 were created, and observers rated their perception of driver distraction and engagement behaviors from selected naturalistic driving videos. Driver kinematics time series were extracted from frame-by-frame coding of driver motions, including head rotation, head flexion/extension, and hands on/off the steering wheel. Results The ratings were consistent among participants. A statistical model predicting average ratings from the kinematic features accounted for 54% of distraction rating variance and 50% of engagement rating variance. Conclusion Rated distraction behavior was positively related to the magnitude of head rotation and the fraction of time the hands were off the wheel. Rated engagement behavior was positively related to the variation of head rotation and negatively related to the fraction of time the hands were off the wheel. Application If automated computer vision can code simple kinematic features, such as driver head and hand movements, then large volumes of naturalistic driving video could be automatically analyzed to identify instances when drivers were distracted or disengaged.
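A sketch of the statistical model described: per-clip kinematic features (magnitude and variation of head rotation, fraction of time hands are off the wheel) feeding a linear regression onto the 0-10 ratings. The feature set follows the abstract's description, but the data rows are placeholders, not study values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def clip_features(head_rotation_deg: np.ndarray, hands_off_fraction: float):
    """Kinematic summary per video clip: mean |head rotation|, its variation,
    and the fraction of frames with hands off the steering wheel."""
    return [np.abs(head_rotation_deg).mean(), head_rotation_deg.std(), hands_off_fraction]

# Placeholder clips and mean observer ratings on the 0-10 visual analog scale
X = [clip_features(np.array([2.0, 40.0, 5.0]), 0.30),
     clip_features(np.array([1.0, 3.0, 2.0]), 0.00)]
y = [7.5, 1.0]
distraction_model = LinearRegression().fit(X, y)
```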


Subject(s)
Attention/physiology; Automobile Driving; Motor Activity/physiology; Psychometrics/methods; Psychomotor Performance/physiology; Adult; Biomechanical Phenomena; Humans
19.
Ergonomics ; 60(12): 1730-1738, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28640656

ABSTRACT

Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC), and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision estimate was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found that HAL was unaffected when the DC error was less than 5%, and that a DC error of less than 10% changes HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
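Downstream of either algorithm, duty cycle and exertion frequency reduce to simple operations on a per-frame binary exertion signal. A minimal sketch, assuming the tracker has already produced that signal (for example, by thresholding hand speed); HAL can then be read from the frequency and duty cycle, as in the equation sketched under entry 1 above.

```python
import numpy as np

def duty_cycle_and_frequency(exerting: np.ndarray, fps: float):
    """Duty cycle (%) and exertion frequency (exertions/s) from a per-frame
    binary exertion signal (1 = exerting, 0 = resting)."""
    duty_cycle = 100.0 * exerting.mean()
    onsets = int(np.sum(np.diff(exerting.astype(int)) == 1))  # rising edges
    frequency = onsets / (len(exerting) / fps)
    return duty_cycle, frequency
```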


Subject(s)
Algorithms; Hand/physiology; Physical Exertion; Time and Motion Studies; Computers; Humans; Video Recording
20.
Appl Ergon ; 65: 461-472, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28284701

ABSTRACT

Patterns of physical stress exposure are often difficult to measure, and the metrics of variation and the techniques for identifying them are underdeveloped in the practice of occupational ergonomics. Computer vision has previously been used for evaluating repetitive motion tasks for hand activity level (HAL) using conventional 2D videos. The approach was made practical by relaxing the need for high precision and by adopting a semi-automatic approach for measuring spatiotemporal characteristics of the repetitive task. In this paper, a new method for visualizing task factors using this computer vision approach is demonstrated. After videos are made, the analyst selects a region of interest on the hand to track, and the hand location and its associated kinematics are measured for every frame. The visualization method spatially deconstructs and displays the frequency, speed, and duty cycle components of tasks that are part of the threshold limit value for hand activity, for the purpose of identifying patterns of exposure associated with specific job factors and suggesting task improvements. The localized variables are plotted as a heat map superimposed over the video and displayed in the context of the task being performed. Based on the intensity of the specific variables used to calculate HAL, we can determine which task factors contribute most to HAL and readily identify those work elements in the task that contribute most to increased injury risk. Work simulations and actual industrial examples are described. This method should help practitioners more readily measure and interpret temporal exposure patterns and identify potential task improvements.


Asunto(s)
Ergonomía/métodos , Análisis y Desempeño de Tareas , Grabación en Video/métodos , Fenómenos Biomecánicos , Trastornos de Traumas Acumulados/etiología , Mano/fisiología , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Movimiento (Física) , Movimiento/fisiología , Enfermedades Profesionales/etiología